				  A.I. 
		 Where It Has Been and Where It Is Going 
				   by 
			     Arthur L. Samuel

It is with a profound feeling of personal inadequacy that I address you
today.  While it is true that I have worked in your chosen field, most of
my work was done a long time ago when the total amount of knowledge in the
field of Artificial Intelligence was much less than it is today and it was
much easier to make a contribution.  I agreed to talk only because it
seemed to me that I should be able to view the field of AI research from a
perspective that is denied to a newer worker in the field, and perhaps I
should share this view with you. I also have a bone to pick with you and
this is a good place to do it.

So this is not going to be an AI talk but rather a talk about the field of
AI.  Perhaps my talk should be called a meta-AI talk.  Actually, A.I.  is
itself a meta-subject since it traditionally has been concerned with
developing procedures for solving a certain class of problems rather than
with the solving of these problems directly.  So, I suppose, the field of
A.I. is a meta-meta subject and the subject of where this field has been
and where it is going is then a meta-meta-meta subject. So it seems that
my talk is to be a meta-meta-meta-talk.

I have also been told that it might not be amiss if I were to do a bit of
reminiscing since it has been my good fortune to have been involved with
the modern digital computer almost from its inception and since I have been
associated with many of the early workers in AI.  I rather hate to start
reminiscing as this is supposed to be a mark of age.  At 81 I am still not
willing to admit that I am getting old.

You know, of course, that there are three ways to tell if a person is
getting old.  In the first place an older person likes to reminisce. I
must plead guilty on this count but then I have always had this failing.
Secondly, old people tend to forget things.  And then there is a third
way---By George, I've forgotten what the third way is.

From one point of view, A.I. started in 1834 or shortly thereafter, when
Charles Babbage suggested the possibility of having his Analytic Engine
play chess. It was not until the 1940's, with the emergence of the modern
digital computer, that we see a renewal of interest in Artificial
Intelligence.  There were, of course, numerous attempts at making
automata of various sorts, but most of the time the aim was to fool the
public rather than to get machines to exhibit behavior which, if done by
humans, would be assumed to involve the use of intelligence. So we can date
the emergence of A.I. as a discipline to sometime in the mid 1940's.

Through a fortuitous combination of circumstances, I became interested in
AI at its very beginning, not in 1834, I hasten to say but in 1947.
Actually, my involvement in computing began
much earlier, in 1924 at MIT when Vannevar Bush got me interested in his
Differential Analyser. One of my first assignments was to see what could
be done in the way of using the differential analyser to solve the
non-linear equations in exterior ballistics, which, at the time, were done
by numerical integration.

The task of solving these equations to the desired accuracy of 0.05% on
the Differential Analyser, which was then capable of perhaps 5% accuracy,
seemed quite hopeless to me.  I can well remember my feeling at the time
that this called for a digital solution and that maybe we should be
working on a digital computer. Yes, digital computers were being talked
about in 1924 but very little was being done about them.  Bush, with his
usual enthusiasm, soon talked me out of any such then revolutionary ideas
and set me to work, with a few hints as to how the ballistic problem might
be solved.

The approach, which Bush suggested and which I implemented, was to write a
set of difference equations to account for the departure of an actual
trajectory from a simple parabolic path and to solve these difference
equations on the Differential Analyser.  By this method we were able to
achieve an overall accuracy of perhaps 0.25%, not the desired accuracy,
but near enough to hold out hope that the desired accuracy could
ultimately be obtained and enough success to make it possible to get
government support for the development of bigger and better differential
analysers.
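
To make the idea concrete in modern terms, here is a minimal sketch of the departure-variable trick in present-day Python. The quadratic-drag model, the coefficients, and the step size are illustrative assumptions, not Bush's actual difference equations: instead of integrating the trajectory itself, one integrates only its departure from the drag-free parabola, a quantity small enough that even a low-precision integrator gives a usable answer.

import math

G = 9.81          # gravity, m/s^2
K = 0.00005       # assumed quadratic-drag coefficient, 1/m (illustrative)
DT = 0.01         # integration step, s (illustrative)

def trajectory_departure(v0, angle_deg, t_end):
    """Integrate the departure (dx, dy) of a drag-affected trajectory
    from the ideal drag-free parabola, rather than the trajectory itself.
    The departure stays small, so a crude integrator loses far less
    relative accuracy than it would on the full coordinates."""
    vx0 = v0 * math.cos(math.radians(angle_deg))
    vy0 = v0 * math.sin(math.radians(angle_deg))
    vx, vy = vx0, vy0                 # actual velocity components
    dx = dy = 0.0                     # accumulated departure from the parabola
    t = 0.0
    while t < t_end:
        speed = math.hypot(vx, vy)
        vx += -K * speed * vx * DT          # drag slows the actual shot
        vy += (-G - K * speed * vy) * DT    # gravity plus drag
        t += DT
        dx += (vx - vx0) * DT               # departure rate in x
        dy += (vy - (vy0 - G * t)) * DT     # departure rate in y
    return dx, dy

if __name__ == "__main__":
    print(trajectory_departure(v0=500.0, angle_deg=45.0, t_end=10.0))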

This basic solution method was still in use during the war years.
Interestingly enough, it was the general dissatisfaction with the results
then being obtained, on the very much improved differential analysers of
the time, that led Herman Goldstine to press for government support of the
computer work at the University of Pennsylvania.  Contrary to the popular
misconception, the digital computer appeared on the scene too late to have
any effect on the war, although there were a few fairly involved
calculators that did useful work, primarily in the field of cryptology.

In retrospect, I often wonder what would have happened had I been
successful in deflecting Bush from his interest in the Differential
Analyser or alternatively, had we been less successful in applying the
differential analyser to the exterior ballistic problem. Just think what
would have happened, if Bush, with his keen intellect and driving
enthusiasm, had started to work on digital computers in 1924.

A brief note in the Fifty Years Ago column of a recent issue of the
Scientific American brought back to my mind the state of
development in 1933, 100 years after Babbage's original work. Let me
quote:

``The fundamental ideas of the behavioristic and Gestalt psychologies
justify attempts to construct and develop machines of a new
type---machines that think. Such machines are entirely different from
integraphs, tide calculators and the like to which the term `thinking
machine' is sometimes applied. Unlike the latter they are not designed to
perform with mathematical regularity but can `learn' to vary their actions
under certain conditions.''

The writer of this note was, of course, thinking about some of the early
work that was then being done on self-organizing systems and not about
digital computers as such but the argument over the relative merits of
these two approaches was already taking shape in 1933.

Yes, we were talking about `thinking machines' in 1933 and even about the
use of electronic devices to solve problems in Artificial Intelligence
although the term Artificial Intelligence had not yet been coined and the
digital computer was still only a dream.  I am sure that our ideas at the
time were hazy in the extreme.  One is apt to remember what one should
have thought at an earlier time and not what one actually did think. On
the other hand, there is the opposing tendency to forget the long gradual
evolution that usually lies behind any new development such, for example,
as the development of the modern digital computer.

I can remember having several heated discussions with people at the Bell
Laboratories about the need for better computing facilities and about the
feasibility of using the long-life vacuum tubes, that I had worked on, for
the construction of some ill-defined machine that would help me in solving
some of my problems.

About this time or a little later, Stibitz got interested in building a
complex number calculator, but he concluded that relays were cheaper and
more reliable than vacuum tubes and that they would satisfy the speed
requirements of the time.

It was not until 1946 that I was again able to think about electronic aids
to calculations and specifically about digital computers.  A combination
of circumstances led me to forfeit the 18-year leg I had on a retirement
pension at Bell (no investiture in those days) and move to Illinois.  My
first job was to set up an experimental electron tube laboratory which I
set about doing.

My old obsession with solutions of space charge problems with complex
boundary conditions still persisted.  The computing services that we had at
Bell were no longer available to me and so I began to agitate for a
computer.  My thought at the time was that we should buy a computer from
some outside supplier. Several organizations were promising big things at
the time.  A visit to Princeton and the University of Pennsylvania and then
to some of the potential suppliers was enough to convince me that most of
these entrepreneurs were talking through their hats and I therefore
proposed to the then dean of the Graduate School that we build our own
computer. This was too much for the dean so he took the usual bureaucratic
path of appointing a committee to look into my suggestion and prepare a
report for him which he simply left on his desk to gather dust.

We were fortunate in getting Louis Ridenour as our new dean, who proceeded
to talk the Board of Trustees out of $110,000 for a computer within less
than a week after he learned of our need.  So we started to work.  We were
fortunate in being able to persuade Ralph Meagher to head up the overall
project, and to persuade Abraham Taub to come to Illinois and head up the
mathematical work.  I was busy running my Electron Device Laboratory but I
did have graduate students tumbling all over themselves trying to find
suitable thesis subjects so I put several of them to work on new computer
components, building a better cathode ray storage tube, developing a
magnetic core storage scheme, developing some special non-synchronous
adder circuits etc.

It became evident almost at once that the $110,000 was not going to be nearly
enough to build the kind of computer that we felt that we needed. We
decided that we should rush ahead and build something less than the Ideal
Computer but get it built right away. We would try to do something
spectacular with this computer so that we could then go back to the Board
of Trustees and ask them for more money.

Since I had been making noises about unconventional things that one could
do with a computer, someone, perhaps it was Taub, suggested that I was
the logical person to undertake the programming task.  My first thought
was that I should fulfill Babbage's old dream of having it play chess.  I
was disheartened when I found that Claude Shannon had apparently beaten me
to the punch but then, when I had arranged to meet Claude, I learned that
he was much less far along with the problem than newspaper reports had led
me to believe.  My talk with Claude did awaken me to the difficulties that
chess presented.

Checkers was my next choice. It happened that a world's championship checker
match was to be held in a neighboring town the next spring and it seemed
quite reasonable at the time for us to put together some sort of a
computer in a few months and for me to write a checker program during this
same period of time. We then intended to challenge the winner of the
spring match and have the computer beat him.  This would give us the
publicity that we needed.  How naive can one be?

So I started to work, writing a checker program for a machine that did not
exist, using an instruction set that I was forced to create as I needed
it, writing directly in octal, and assigning fixed memory locations for my
variables, since the idea of an assembly language and of an assembler had
not yet been invented.  The situation was ideal to highlight the need for
a symbolic notation, and why I did not invent one I will never know.

Nearly two years later, I had written my first checker program but the
computer was still only a paper design. Faced with a very bad political
situation at the University, I decided to leave Illinois and to accept a
job with IBM.  As an aside, the situation got worse, the University
president was fired, there was a big shake-up, the situation finally
improved, some government money became available, and the Illiac was
built.

But what about my checker program?  It, of course, had to be completely
rewritten, first for an interim computer called the Defense Calculator and
later for the 701 as soon as the instruction set for this machine had
begun to take shape.  This first version was still written in octal and it
ran successfully on the very first model of the 701.  It was only later
after the idea of an assembly program had been implemented that I was able
to write in an assembly language.  I was then faced with the task of
converting my running machine language program from its digital form into
symbolic form, and this forced me to write a ``Disassembly'' program.
Incidentally, this Disassembly program was to do yeoman duty in
converting the library of system and application programs, that had by
then been written, from octal to symbolic form so that these programs
could be adequately maintained and improved.
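
For readers who have never had to write one, the following toy sketch in Python suggests the kind of translation a disassembler performs: mapping numeric operation codes and addresses back into mnemonics and symbolic labels. The opcode table, the word layout, and the routine itself are invented for illustration and bear no relation to the 701's actual instruction set.

# A toy disassembler; the opcode table and word layout are assumptions.
OPCODES = {0o01: "LOAD", 0o02: "ADD", 0o03: "STORE", 0o04: "JUMP"}

def disassemble(words):
    """Turn numeric instruction words back into symbolic form.
    Each word is assumed to pack a small opcode above a 12-bit address."""
    labels = {}                        # jump targets get symbolic names
    for w in words:
        if (w >> 12) == 0o04:          # JUMP: remember its target address
            labels.setdefault(w & 0o7777, "L%d" % len(labels))
    lines = []
    for addr, w in enumerate(words):
        op = OPCODES.get(w >> 12, "DATA")
        target = w & 0o7777
        operand = labels.get(target, "%04o" % target)
        label = labels.get(addr, "")
        lines.append("%-4s %-6s %s" % (label, op, operand))
    return "\n".join(lines)

if __name__ == "__main__":
    # a tiny made-up program: load, add, store, then jump back to the start
    print(disassemble([0o010100, 0o020101, 0o030102, 0o040000]))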

IBM, in those days, did not take kindly to one of their engineers wasting
company time playing checkers, even if it was against a machine, and so much
of my work had to be done on my own time.  I had dressed my efforts up
with a certain amount of respectability by adding a learning feature to
the program but even then it was the use of the program as a test vehicle
which could be adjusted to run continuously for any arbitrary period that
got me the machine time that I needed to try out my newer learning
routines.

Later, during the 704 days, I can remember a period when there were often
several 704's on the test floor at the same time with test crews working
two shifts a day. The test crews would arrange to let me have all the time
I wanted from 11 PM to 7 AM. I remember one occasion when I was able to
keep 4 machines running for most of the night with machine learning
experiments.

Several events occurred along about this time that make it possible for me
to recall the exact time sequence of A.I. development during these early
years. Two of these events had a significant effect on the course of
future events.

The first event was the Dartmouth Summer Research Project on
Artificial Intelligence that John McCarthy called together during the
summer of 1956. Only a very small group of people were involved but the
entire field of Artificial Intelligence was greatly influenced by this
meeting. The name Artificial Intelligence was, of course, coined by John
in the process of naming this project.  Never again would it be possible
for so small a group to have such a profound influence on the course of
A.I. research.

The second event was the publication in 1963 of the book ``Computers and
Thought'' edited by Feigenbaum and Feldman. This book reprinted some
twenty papers by twenty-eight authors that the editors thought would
summarize, or at least would be typical of, the state of the art in some ten
aspects of the general field of artificial intelligence.  Most of you know
this book, I am sure, and it would be a waste of your time for me to
review it in detail at this time.  Let me say but one or two things. Most
of these papers discussed what might be called ``toy'' problems, that is,
problems that were specifically chosen because they required the
development of new A.I. principles but were, nevertheless, sufficiently
well defined that one could measure one's progress toward meeting some
desired goal.  A second feature was the sharp division of the papers into
two distinct categories, the one set that tried to look at the way that
people solved the chosen problem and ape this action by machine, and the
second group that considered the problem more or less in the abstract and
tried to invent machine methods that were tied to the unique
characteristics of the computer.  But more on these matters later.

The third and fourth events, not of such earth-shaking significance, but
important to me as mnemonic aids, were the publication by me of two papers,
the one which tried to summarize the state of development of AI in 1962,
and the second which made predictions as to where computers would be in 1984.

The first paper was called ``Artificial Intelligence:  A Frontier of
Automation''.  I am tempted to quote from this paper at some length since
many things that I said then could equally well be said today, but I will
forbear.  The paper ends with a statement that ``We are still in the
game-playing stage, but, as we progress in our understanding, it seems
reasonable to assume that these newer techniques will be applied to real
life with increasing frequency and that the effort devoted to games and
other toy problems will decrease.''  I will have more to say about toy
problems later because I have come to attach greater significance to toy
problems than I did in 1962.  

As a minor aside, I revealed my own personal bias between the two
approaches to A.I. problems, the one approach that asks how people solve
such problems and the second that looks at the problem in the abstract and
tries to develop machine-specific methods of solution. I revealed my bias
by heading the section that described work based on the first approach with
the phrase ``Bird-Watching'' and by heading the contrasting section with
the phrase ``Back to Aerodynamics''.

I cannot refrain from quoting another excerpt from this paper, to wit:
``Progress is being made in machine learning and ...we will be able to
devise better machines or even to program existing ones so that they can
outperform man in most forms of mental activity. In fact, one suspects
that our present machines would be able to do this now were we but smart
enough to write the right kinds of programs.  The limitation is not in the
machine but in man.  Here then is a paradox. In order to make machines
which appear smarter than man, man himself must be smarter than the
machine''.

But I want to move on to the second paper that was published in 1964,
called ``The Banishment of Paper Work'' and in which I made some
predictions as to the state of affairs in 1984. I want to review my
predictions both to assess our progress and as a prelude to making some
new predictions for the year 2004.

The paper is too long to be quoted in detail but perhaps a few excerpts
will do.  In one place I say ``Given computers that are perhaps 100 to
1000 times as fast as the present day computers [the IBM PC and the Cray
are in this range], computers with large memories [and they have gotten
large although Von Neumann maintained to his dying day that 1000 words was
all that would ever be needed], computers which occupy perhaps one
one-hundredth the volume that they now do [roughly the ratio between the
704 and the IBM PC], computers that are much cheaper [the factor is about
1000] and finally computers that learn from their experience and which can
converse freely with their masters---what can we predict?''

I went on to predict a number of things, most of which have come true:
That we would see a dichotomy in the development of very large computers
acting as number crunchers and as data bases with the widespread use of
small personal computers that were quite respectable computers in their
own right but that would also serve as terminals for access to the central
data bases; That all the accounting work of the world would be done by
computer, hence the title of The Banishment of Paper-Work; And that
process control with the attending automation would have reached a very
high degree of development.

It was only in my predictions relating to artificial intelligence that I
completely missed the mark.  I stated that ``...we have good reason for
predicting that two rather basic problems will have been solved. The first
of these has to do with learning... .The second difficulty resides in the
nature of the instructions which must be given [to the computer]. ... in
short one cannot converse with a computer.''  I still believe that I was
right in assessing these two problems as the two most basic problems that
stood in the way of progress but I was completely wrong in my belief that
they would be solved in twenty years.  So what has really happened?

The situation with respect to machine learning is particularly
distressing.  What does The Handbook of Artificial Intelligence have to
say about the matter?  I quote ``But in general, learning is not
noticeable in AI systems.''  This stricture, appearing in volume 1, is
somewhat softened in volume 3 where some 388 pages are devoted to Learning
and Inductive Inference, as compared with a total of 1300 pages for the
three volumes, a not unreasonable fraction of the total discussion.  Much
of this work seems to be quite good.  It is significant, however, that an
embarrassingly large amount of space is used to describe work that was done
some twenty years ago and work that should have long since been
superseded. I am safe in saying this because I am referring primarily to
my own work. The stricture remains:  Learning is, in general, still not
noticeable in AI systems!

Much more work has been done on the problem of man-machine communication,
and in making use of this work in AI systems, if we include, under this
umbrella, the work on menu-driven techniques, the use of questioning
techniques in which the computer restricts the range of the human
responses by asking questions, the work on character reading, and the more
profound work on speech recognition, and on language understanding.
However, I look on much of the work on techniques of communication as
palliative measures, useful to be sure, but not the ultimate solution.
It is here that I have a real quarrel to pick with many of you:
I believe that the desired progress in man-machine communication
will not be made until it is treated as a learning problem.

To illustrate my point, let me raise the question as to how a child learns
to communicate or to talk.  You will note a bias in the very way that we
ask the question. The question is: How does a child learn to talk? not,
How do we teach a child to talk?  Children learn to talk willy-nilly,
whether there is a conscious effort on the part of their parents to teach
them or not.  We explain this by saying that children are born with an
instinctive desire to learn, and with the ability to learn.  Computers do
not have instinctive desires but they are obedient servants and if we tell
them to learn and if we tell them how to learn, they will do it. So all we
have to do is to tell them how, either by wiring this knowledge into their
very structure or by supplying them with the necessary software.  This we
have failed to do.

In fact, I look on learning as the central problem of AI research, and I
believe that our failure to address this problem with enough vigor is the
chief reason why the entire field of AI has failed to come up to my
expectations.  Oh, of course, you will say that my expectations were
highly inflated and based on a gross failure on my part to realize how
difficult the problems really are. The fact remains, that my 1963 AI
predictions are nearly as far from realization today as they were in 1963!

On the positive side, I believe that there are signs of a revival of interest in
machine learning. Perhaps, with this prod from me there will be a real revival
of interest in this type of research.

But why has machine learning been so neglected?
I can mention quite a few things that may have had an effect on
the directions that AI research has been taking.

1. Until recently substantially all of the work in AI was done at
universities.  This is not bad in itself, but most Computer Science
departments have been growing by leaps and bounds and much of the research
is being done by graduate students who try to pick thesis subjects that
they think will lead to a degree with the least amount of work and in a
reasonably short period of time.  The faculty members themselves are
frequently overloaded with students and have little time for continuing
personal research projects.

2. Sponsoring agents are also interested in quick results. People's memories
may be too short to remember, but I objected strongly to the 5-year
schedule set by ARPA for a research effort on speech recognition when a
ten-year program was certainly called for.  This, in no way, casts
aspersions on the quality of the work that was done, but nevertheless the
goals were clearly gauged by the need to show positive results in the
allotted time span.  Time constraints are frequently beneficial as applied
to engineering and product development but they seldom yield results when
applied to research.

I can also remember a less happy crash program in language translation at
an earlier period which consumed a lot of effort and which got exactly
nowhere because of time constraints.

3. Industry, when it does become interested in AI, tends to opt for a
program in Knowledge Engineering rather than for AI research.  Not that
Knowledge Engineering is a bad thing in its own right, but all too often
the very people who are most capable of doing real research are siphoned
off by the lure of big money.

4. The tendency to switch from research to knowledge engineering is not
confined to industry.  AI research personnel in universities frequently
start out doing worthwhile research, but they then tend to become enamored
with the uses to which their research may be put and veer off into
applications.  Again, there is nothing wrong with this, if the worker is
fully aware of what he is doing, but it does deplete the ranks and
frequently just at the time that the going is getting rough and when more
effort rather than less is needed.

5. There are fads in AI as in any other field.  Fads lead to duplication
of effort and the temporary neglect of certain types of work that will
later prove to be a bottleneck.  On the other hand, fads result in intense
rivalry which puts everyone on their toes and the net effect may be good.

Perhaps I have said enough.  Think on these things.

It is time for me to make my predictions for 2004 and to sit down.  I feel
somewhat more confident in making predictions about computers in general,
where my track record is pretty good, than I do in making predictions about
AI, so I will start there.

I predict:

1. The present trend in miniaturization will continue and two distinct
types of small computers will be available, a home computer roughly
equivalent to the present day personal computer and a much smaller pocket
computer.

2. The home computer will have at least one million bytes of memory.  An
acceptable form of non-volatile storage will have been developed, for this
purpose, although it will have to compete with volatile storage schemes
that consume so little power that they will normally be left on at all
times.  This computer will be capable of communicating with the many large
data banks that will be available and with other personal computers, all,
of course, under suitable protocols which attend to such matters as file
integrity and secrecy.  It will also be capable of communicating via radio
with one's pocket computer, as will be described a little later.

The home computer will be capable of receiving and transmitting pictorial
material as well as textual material, and it will function as a
video-telephone when this is desired. Since there will still be an
occasional need for printed output, this computer will produce book-quality
output and adequate paper handling facilities will be available, probably
at an extra cost, to enable one to receive printed and pictorial material,
the morning paper perhaps, without having to be present at the computer
when it is arriving.

3. The pocket computer will be a rather modest computer in its own right
with, perhaps, sixteen thousand bytes of memory.  Its chief function will
be to communicate by radio with one's home computer or one's office
computer either directly or via an exchange system when far away. It will,
in effect, be a smart terminal.  It will also double as a telephone so
that one will be able to speak with anyone any place in the world who has
a phone connection. Video-phone equipment will still be rather bulky and
so most people will not include this facility in their pocket computers.

4. Large centrally located computers will still be used for large
calculations. They will have reached a practical limit in size, however,
as set by velocity of light limitations, and the general trend will be to
either partition large problems so that they can be solved in pieces or to
tie many separate machines together on a temporary basis for truly large
problems, with the separate machines operated as separate entities most of
the time.  Newer methods of storage will have made it possible to
construct truly large memories so that this will not be a bottleneck.

5. The development of large data-base computers will have truly happened,
rather later than I predicted in 1963.  As I said then, ``One will be able
to browse through the fiction section of the central library, enjoy an
evening's light entertainment viewing any movie that has ever been
produced (for a suitable fee, of course, since Hollywood will still be
commercial) or inquire as to the previous day's production figures for tin
in Bolivia---all for the asking via one's remote terminal''. I am less
sanguine about the next prediction that I then made, to the effect that:
``Libraries for books will have ceased to exist in the more advanced
countries except for a few which will be preserved as museums'', although,
as I then said, ``and most of the world's knowledge will be in machine
readable form''. Having lived with computers for lo these many years,
with a terminal in my office and one in my study at home, I still find
printed books to be very comforting, and I now wonder if this feeling will
somehow keep libraries in existence in spite of the fact that, and I again
quote, ``... the storage problem will make it imperative that a more
condensed form of recording be used ...''

The Dartmouth Summer Research Project on Artificial Intelligence occupied
the summer of 1956.  However, only a few people were there the whole
time - perhaps Minsky and me, perhaps only me.  Some came for only a few
days.  I can't imagine any way
to get a complete list.  Here is a partial list, and you might try to supplement
it by asking Minsky.

John McCarthy
Marvin Minsky
Oliver Selfridge
Nathaniel Rochester
Claude Shannon
Raymond Solomonoff
Julian Bigelow
Arthur Samuel
Alex Bernstein
Allen Newell
Herbert Simon
Herb Gelernter

Three things can be said about this collection in retrospect.  In the
first place it uniquely defined the field as it then existed.
Secondly it proved to be the hallmark by which future publications were
judged for some time. Finally, it influenced the course of development in
the entire field for a surprisingly long period of time, at least until
the ascendancy of the expert system approach to A.I. that Feigenbaum and
his students fostered and developed.

When I first started to think about what I might say at this gathering, I
thought that it would be nice to categorize the various stages in the
evolution of A.I.  I was unsuccessful in doing this.  The evolution does
not seem to have been an orderly one.


Perhaps this is the place to try to categorize the various stages in the
evolution of A.I.  One must be careful, however, to remember that this
evolution has not necessarily been an orderly one and that there has been
no very sharp division in periods when one type of research predominated.
As an example, the very early work in A.I. was mainly concerned with game
playing.  However, interest in game playing has never diminished, and today
there are more people interested in game playing by machine than ever
before.  Nevertheless, one can characterize the period from perhaps 1949
to 1959 as the game-playing era.  Perhaps a clearer way to categorize the
situation is to say that the various aspects of A.I. each start to be
dominant at some specified time, go through an early adolescence when
new ideas are free for the finding, then to a period of early adulthood
when the early exuberance of youth is tempered with a more mature
realization as to the difficulties of making future progress, then finally, to full
maturity, where the effort turns toward making some practical use of the
chosen aspect of A.I.  The situation has progressed today until much of the
work that is being done in game playing is not A.I. at all but is simply the
programming of computers to do things that sell.

Which brings me to one of the points that I want to make tonight and that
is to plead with all of you to sit back periodically and speculate on
where we are going. It may well be that, preoccupied with current problems,
we fail to see that the long-range solution may lie down quite a different
path from the one we are currently taking.

But I had intended to skip a century, which brings us to 1933.  By that time I
had become interested in the design of electron tubes and was working on
microwave tubes. A short paragraph in the Fifty Years Ago column of a
recent issue of the Scientific American brings this period back to my
memory.  Let me quote it, in part. 

Where do I start?  When computers are involved, it is customary to start
with Babbage.  How many of you remember that Babbage proposed to program
his Analytic Engine to play chess?  He undoubtedly gave some thought to
the methods that he would use and it is not unreasonable to assume that he
entertained ``A.I.'' thoughts in this regard. The technology of his time
was not yet up to the necessary state and so Babbage never was able to
complete the Analytic Engine and he died an embittered and disappointed
man. Nevertheless, if we want to date the beginning of A.I. we should go
back to 1833 or 1834.

Having paid homage to Babbage, we can now skip a century, which brings us
to 1933 plus or minus. There were, of course, many developments during
these ensuing years that made the modern digital computer possible. In
retrospect, it seems that the development of the electron tube and indeed
the entire field of electronics had progressed far enough, certainly by
1933, the hundredth anniversary of Babbage's invention, so that the
digital computer could have been developed well before the war years.  Why
it was not developed is an interesting story.

I could go on and tell how IBM had decided that they would have to make
their own vacuum tubes and how they had set up the start of this facility.
One of the reasons that they were so quick to offer me a job and to agree
to my figure for a starting salary was that they were having real trouble
in producing tubes that would come anywhere near meeting their desired
standards and they thought that I could get them out of their
difficulties.  This was 1949, remember, and the invention of the transistor
had by then been made public by the Bell Telephone Laboratories.  It was
obvious to me that the transistor would soon replace the vacuum tube in
computers and that this was no time to set up a vacuum tube factory.  I
was not certain enough of my position to act at once and so I did the best
I could to straighten things out and to divert some of the available
manpower to making some better Williams Storage tubes and a few other
special purpose tubes that we did need and that were not available on the
market.

Finally, the time came when I decided that I could delay no longer and
that I had to convince the top management that we should phase out the
work on vacuum tubes and set up a solid state laboratory in its place.
Top management, in this case, meant Mr. T. J. Watson Sr., who, it seems, had
authorized my hiring as the best available man to get their tube factory
out of its difficulties.  As he viewed it, I was falling down on the job
and now I was trying to cover my failure to perform by abolishing the
entire operation.  It was nip and tuck as to whether I was to be fired
forthwith.  Well, I wasn't, although I spent a full week attending
meetings with Tom Watson Jr. and with others that he called in to question
me.

We set up the solid state laboratory and started to make transistors.  I
was fortunate in my choice of people to man this facility and they were
soon doing creditable work. The rest is history, although I had one final
clash with management to prevent them from directing us to switch to
transistors too soon, before the barrier transistor had been made available
and had been found to have an acceptable life characteristic.